AWS Batch
ML Without the Ops: Running Experiments at Scale with Ploomber on AWS
For the past couple of months, we've chatted with many Data Science and Machine Learning teams to understand their pain points. There are many, of course, but the one that surprised me the most is how hard it is to get a simple end-to-end workflow working, partly because vendors often lock teams into complicated solutions that require a lot of setup and maintenance. This blog post describes a simple architecture that you can use to start building data pipelines in the cloud without sacrificing your favorite tooling or incurring high maintenance costs. The solution involves our open-source frameworks and AWS Batch.
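To make the idea of a simple pipeline concrete, here is a minimal sketch of a Ploomber `pipeline.yaml` with two tasks, in the spirit of the architecture described above. The file names and product paths are placeholders chosen for illustration, not taken from the article:

```yaml
# pipeline.yaml - a minimal two-task Ploomber pipeline (illustrative sketch)
tasks:
  # First task: fetch/clean the raw data
  - source: tasks/get_data.py
    product:
      nb: output/get_data.ipynb   # executed copy of the script, for inspection
      data: output/data.csv       # the dataset downstream tasks consume

  # Second task: train a model using the output of the first task
  - source: tasks/fit.py
    product:
      nb: output/fit.ipynb
      model: output/model.pickle
```

Ploomber infers the dependency between the two tasks from the references in the scripts themselves, so the same local pipeline can later be exported to run each task as a containerized job in the cloud.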
Facebook uses Amazon EC2 to evaluate the Deepfake Detection Challenge
In October 2019, AWS announced that it was working with Facebook, Microsoft, and the Partnership on AI on the first Deepfake Detection Challenge. Deepfake algorithms are based on the same underlying technology that has given us realistic animation effects in movies and video games. Unfortunately, those same algorithms have been used by bad actors to blur the distinction between reality and fiction. Deepfake videos result from using artificial intelligence to manipulate audio and video to make it appear as though someone did or said something they didn't. For more information about deepfake content, see The Partnership on AI Steering Committee on AI and Media Integrity.
The Dawn of Zendesk's Machine Learning Model Building Platform with AWS Batch
When we worked on Content Cues, one of Zendesk's machine learning products, we encountered the scalability challenge of having to build up to 50k machine learning (ML) models daily. Looking at those numbers was initially nerve-wracking. This article focuses on the new model building platform we designed and built for Content Cues, which has been running on AWS Batch in production for a few months. From conception to implementation, the process has been a challenging yet rewarding experience for us, and we would like to share our journey with you. This is the first of a three-part series, covering how we evaluated different technology options (AWS Batch, Amazon SageMaker, Kubernetes, EMR Hadoop/Spark) and ultimately decided on AWS Batch.
How to Run Customized Tensorflow Training in the Cloud
You have your TensorFlow code running locally, and now you want to run it in a production environment for all that extra GPU power. There are a couple of alternatives out there. Two of the more popular managed ML cloud platforms are Google Cloud ML Engine and Amazon SageMaker; they let you quickly deploy your models and train them.
Deep Learning on AWS Batch
GPU instances naturally pair with deep learning because neural network algorithms can take advantage of their massive parallel processing power. AWS provides GPU instance families, such as g2 and p2, which allow customers to run scalable GPU workloads. You can leverage such scalability efficiently with AWS Batch. AWS Batch manages the underlying compute resources on your behalf, allowing you to focus on modeling tasks without the overhead of resource management. Compute environments (that is, clusters) in AWS Batch are pools of instances in your account, which AWS Batch dynamically scales up and down, provisioning and terminating instances based on the number of queued jobs.
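The kind of managed compute environment described above can be sketched as the JSON you might pass to the AWS Batch CreateComputeEnvironment API (for example via `aws batch create-compute-environment --cli-input-json`). The names, subnet, security group, and role ARNs below are placeholders, not real resources:

```json
{
  "computeEnvironmentName": "gpu-training",
  "type": "MANAGED",
  "state": "ENABLED",
  "computeResources": {
    "type": "EC2",
    "minvCpus": 0,
    "maxvCpus": 64,
    "desiredvCpus": 0,
    "instanceTypes": ["p2.xlarge"],
    "subnets": ["subnet-xxxxxxxx"],
    "securityGroupIds": ["sg-xxxxxxxx"],
    "instanceRole": "arn:aws:iam::123456789012:instance-profile/ecsInstanceRole"
  },
  "serviceRole": "arn:aws:iam::123456789012:role/AWSBatchServiceRole"
}
```

With `minvCpus` set to 0, AWS Batch scales the pool down to zero instances when no jobs are queued and provisions p2 GPU instances only while jobs are running, which is exactly the dynamic scaling behavior the paragraph above describes.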